
    The role of learning on industrial simulation design and analysis

    Full text link
    The capability of modeling real-world system operations has turned simulation into an indispensable problem-solving methodology for business system design and analysis. Today, simulation supports decisions ranging from sourcing to operations to finance, starting at the strategic level and proceeding toward the tactical and operational levels of decision-making. In such a dynamic setting, the practice of simulation goes beyond being a static problem-solving exercise and requires integration with learning. This article discusses the role of learning in simulation design and analysis, motivated by the needs of industrial problems, and describes how selected tools of statistical learning can be utilized for this purpose.

    Semiconductor manufacturing simulation design and analysis with limited data

    Full text link
    This paper discusses simulation design and analysis for Silicon Carbide (SiC) manufacturing operations management at the New York Power Electronics Manufacturing Consortium (PEMC) facility. Prior work has addressed the development of a manufacturing system simulation as decision support for solving the strategic equipment portfolio selection problem in the SiC fab design [1]. As we move into the phase of collecting data from the equipment purchased for the PEMC facility, we discuss how to redesign our manufacturing simulations and analyze their outputs to overcome the challenges that naturally arise in the presence of limited fab data. We conclude with insights on how an approach designed to reflect learning from data can enable our discrete-event stochastic simulation to accurately estimate the performance measures for SiC manufacturing at the PEMC facility.

    Implementing Digital Twins That Learn: AI and Simulation Are at the Core

    No full text
    As companies try to build more resilient supply chains using digital twins created by smart manufacturing technologies, it is imperative that senior executives and technology providers understand the crucial role of process simulation and AI in quantifying the uncertainties of these complex systems. The resulting digital twins enable users to replay history, gain predictive visibility into the future, and identify corrective actions to optimize future performance. In this article, we define process digital twins and their four foundational elements. We discuss how key digital twin functions and the enabling AI and simulation technologies integrate to describe, predict, and optimize supply chains for Industry 4.0 implementations.

    Quantifying input uncertainty in an assemble-to-order system simulation with correlated input variables of mixed types

    No full text
    We consider an assemble-to-order production system where the product demands and the time since the last customer arrival are not independent. The simulation of this system requires a multivariate input model that generates random input vectors with correlated discrete and continuous components. In this paper, we capture the dependence between input variables in an undirected graphical model and decouple the statistical estimation of the univariate input distributions and the underlying dependence measure into separate problems. The estimation errors due to the finiteness of the real-world data introduce the so-called input uncertainty into the simulation output. We propose a method that accounts for input uncertainty by sampling the univariate empirical distribution functions via bootstrapping and by maintaining a posterior distribution of the precision matrix that corresponds to the dependence structure of the graphical model. The method improves the coverage of the confidence intervals for the expected profit per period.
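
    A minimal sketch of this decoupling, assuming a Gaussian-copula graphical model: marginals are resampled by bootstrapping, while the dependence is represented by a precision matrix drawn from a conjugate Wishart posterior. The prior choices, function names, and data shapes below are illustrative simplifications, not the paper's implementation.

        import numpy as np
        from scipy import stats

        def bootstrap_marginal(data, rng):
            """Resample one input's data; the sorted resample defines an empirical CDF."""
            return np.sort(rng.choice(data, size=len(data), replace=True))

        def sample_precision(z, rng, prior_df=None):
            """Draw a precision matrix from a conjugate Wishart posterior, given
            normal scores z (an n-by-K array) of the observed input vectors."""
            n, k = z.shape
            df = (prior_df or k + 2) + n                 # posterior degrees of freedom
            scale = np.linalg.inv(np.eye(k) + z.T @ z)   # posterior scale (identity prior)
            return stats.wishart.rvs(df=df, scale=scale, random_state=rng)

        def generate_inputs(marginals, precision, m, rng):
            """Sample m input vectors: correlated normals -> uniforms -> empirical quantiles."""
            cov = np.linalg.inv(precision)
            d = np.sqrt(np.diag(cov))
            corr = cov / np.outer(d, d)                  # normalize to a correlation matrix
            z = rng.multivariate_normal(np.zeros(len(marginals)), corr, size=m)
            u = stats.norm.cdf(z)
            return np.column_stack([np.quantile(s, u[:, j]) for j, s in enumerate(marginals)])

    Repeating the bootstrap and posterior draw across macro-replications is what propagates the input uncertainty into the simulation output and widens the confidence interval for the expected profit per period.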

    Fitting time series input processes for simulation

    No full text
    DOI: 10.1287/opre.1040.019

    Input uncertainty in stochastic simulations in the presence of dependent discrete input variables

    No full text
    This paper considers stochastic simulations with correlated input random variables having NORmal-To-Anything (NORTA) distributions. We assume that the simulation analyst does not know the marginal distribution functions and the base correlation matrix of the NORTA distribution but has access to a finite amount of input data for statistical inference. We propose a Bayesian procedure that decouples the input model estimation into two stages and overcomes the problem of inconsistently estimating the base correlation matrix of the NORTA distribution in the presence of discrete input variables. It further allows us to estimate the variability of the simulation output data that is attributable to the input uncertainty due to not knowing the NORTA distribution. Using this input uncertainty estimate, we introduce a simple yet effective method to obtain input-uncertainty-adjusted credible intervals. We illustrate our method in an assemble-to-order production system with a correlated demand arrival process.
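
    For reference, the forward NORTA transformation itself is short: correlated standard normals are pushed through the inverse marginal CDFs. The sketch below shows only this generation step, not the paper's two-stage Bayesian estimation; the marginals and base correlation matrix are assumed for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # Base correlation matrix of the underlying multivariate normal (assumed).
        base_corr = np.array([[1.0, 0.6],
                              [0.6, 1.0]])

        # One continuous and one discrete marginal, echoing the paper's setting.
        marginals = [stats.expon(scale=2.0).ppf,   # e.g., interarrival time
                     stats.poisson(mu=5.0).ppf]    # e.g., demand size

        def norta_sample(n, base_corr, marginals, rng):
            """Z ~ N(0, base_corr); U = Phi(Z); X_k = F_k^{-1}(U_k)."""
            z = rng.multivariate_normal(np.zeros(len(marginals)), base_corr, size=n)
            u = stats.norm.cdf(z)
            return np.column_stack([ppf(u[:, k]) for k, ppf in enumerate(marginals)])

        x = norta_sample(1000, base_corr, marginals, rng)

    The estimation difficulty the paper targets arises in the reverse direction: with discrete marginals, standard estimators of the base correlation matrix can be inconsistent, which is what the two-stage Bayesian procedure is designed to overcome.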

    Stochastic input model selection

    No full text
    Input modeling is the selection of a probability distribution to capture the uncertainty in the input environment of a stochastic system. Example applications of input modeling include the representation of the randomness in the time to failure for a machining process, the time between arrivals of calls to a call center, and the demand received for a product of an inventory system. Building simulations of stochastic systems requires the development of input models that adequately represent the uncertainty in such random variables. Since there is an abundance of probability distributions that can be used for this purpose, a natural question to ask is how to identify the probability distribution that best represents the particular situation under study. For example, is the exponential distribution a reasonable choice to represent the time to failure for a machining process, or is it better to use an empirical distribution function obtained from the historical time-to-failure data? Recognizing the fact that there is no true input model waiting to be found, the goal of stochastic input modeling is to obtain an approximation that captures the key characteristics of the system inputs.

    The development of a good input model requires the collection of as much information as possible about the relevant randomness in the system, as well as historical data consisting of past realizations of the random variables of interest. In the presence of a data set, the input model can be identified by fitting a probability distribution to the historical data. However, it may be difficult and/or costly to collect data for the stochastic system under study; it can also be impossible to collect any data at all, such as when the proposed system does not exist. In the absence of historical data, any relevant information (e.g., expert opinion and the conventional bounds suggested by the underlying physical situation) can be used for input modeling. This article addresses the key issues that arise in stochastic input modeling both in the presence and in the absence of historical data.

    The first step in input modeling is to identify the sources of randomness in the input environment of the system under study. Many stochastic systems contain multiple sources of uncertainty; e.g., the completion time of an item on a particular machine, the potential breakdown of the machine, and the percentage of defective items produced by the machine might be among the sources of uncertainty in a manufacturing setting. Throughout, the random vector X = (X_1, X_2, …, X_K)′ is used to represent the collection of K different inputs of a stochastic system, where X_k is the random variable denoting the kth system input. The K components of this random vector might also be correlated with each other. Therefore, the stochastic properties of the random inputs X_k, k = 1, 2, …, K, are captured in the joint probability distribution of X.
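
    As a concrete instance of the exponential-versus-empirical question posed above, the sketch below fits an exponential distribution to time-to-failure data by maximum likelihood and checks the fit with a Kolmogorov-Smirnov test; the data here are synthetic placeholders.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        failures = rng.gamma(shape=2.0, scale=50.0, size=200)  # stand-in historical data

        # Exponential MLE with the location fixed at zero: scale = sample mean.
        loc, scale = stats.expon.fit(failures, floc=0.0)
        ks_stat, p_value = stats.kstest(failures, stats.expon(loc=loc, scale=scale).cdf)
        print(f"fitted mean = {scale:.1f}, KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")

    A small p-value signals that the exponential misses key characteristics of the data, in which case the empirical distribution function may be the safer input model, consistent with the view that input modeling seeks a good approximation rather than a true model.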

    Evaluation of the ARTAFIT method for fitting time-series input processes for simulation

    No full text
    Time-series input processes occur naturally in the stochastic simulation of many service, communications, and manufacturing systems, and there is a variety of time-series input models available to match a given collection of properties, typically a marginal distribution and an autocorrelation structure specified via one or more time lags. The focus of this paper is the situation in which the collection of properties is not "given," but data are available from which a time-series input model is to be estimated. The input model we consider is the very flexible autoregressive-to-anything (ARTA) model of Cario and Nelson [Cario, M. C., B. L. Nelson. 1996. Autoregressive to anything: Time-series input processes for simulation. Oper. Res. Lett. 19:51–58]. Recently, we developed a statistically valid algorithm (ARTAFIT) for fitting this model to stationary univariate time-series data using marginal distributions from the Johnson translation system. In this paper, we perform a comprehensive numerical study to assess the performance of our algorithm relative to the two most commonly used approaches: (a) fitting the marginal distribution while ignoring the autocorrelation structure, and (b) fitting the marginal distribution as in (a) and, separately, the autocorrelation structure using the sample autocorrelation function. We find that ARTAFIT, which fits the marginal distribution and the autocorrelation structure jointly, outperforms both (a) and (b), and we demonstrate the importance of taking dependence into account when developing input models for stochastic simulation.
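
    To make the ARTA construction concrete, the sketch below generates an ARTA series by pushing a stationary AR(1) base process through the standard normal CDF and then the inverse CDF of a Johnson SU marginal. It shows generation for a given base autocorrelation only; choosing that base value and the Johnson parameters so the output matches fitted targets is precisely the job of ARTAFIT, and all parameter values below are illustrative.

        import numpy as np
        from scipy import stats

        def arta_series(n, base_rho, marginal_ppf, rng, burn_in=100):
            """ARTA: AR(1) base process -> Phi -> inverse marginal CDF."""
            z = np.empty(n + burn_in)
            z[0] = rng.standard_normal()
            sigma = np.sqrt(1.0 - base_rho**2)    # keeps Z_t marginally N(0, 1)
            for t in range(1, n + burn_in):
                z[t] = base_rho * z[t - 1] + sigma * rng.standard_normal()
            u = stats.norm.cdf(z[burn_in:])       # autocorrelated uniforms
            return marginal_ppf(u)                # series with the desired marginal

        rng = np.random.default_rng(1)
        x = arta_series(1000, base_rho=0.7,
                        marginal_ppf=stats.johnsonsu(a=1.0, b=2.0).ppf, rng=rng)

    Because the transformation is monotone, the output inherits the dependence of the base process while exactly matching the chosen marginal, which is what lets ARTAFIT fit the marginal and the autocorrelation structure jointly rather than one at a time.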